
    Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets

    We present an example-based approach to pose recovery, using histograms of oriented gradients as image descriptors. Tests on the HumanEva-I and HumanEva-II data sets provide insight into the strengths and limitations of an example-based approach. We report mean relative 3D errors of approximately 65 mm per joint on HumanEva-I, and 175 mm on HumanEva-II. We discuss our results using single and multiple views. Also, we perform experiments to assess the algorithm's generalization to unseen subjects, actions and viewpoints. We plan to incorporate the temporal aspect of human motion analysis to reduce orientation ambiguities and to increase pose recovery accuracy.
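    The example-based recovery described above can be sketched as a nearest-neighbor lookup: given a precomputed database of descriptor-pose pairs, find the visually closest examples and interpolate their poses. The sketch below assumes the HOG descriptors have already been extracted into flat vectors; the distance function, k, and weighting scheme are illustrative assumptions, not the paper's exact setup.

    ```python
    import numpy as np

    def recover_pose(query_desc, db_descs, db_poses, k=5):
        """Example-based pose recovery: select the k database descriptors
        closest to the query and return a distance-weighted average of
        their associated 3D poses.
        Illustrative sketch; k and the weighting are assumptions."""
        # Low-cost distance: Euclidean distance between flat descriptors.
        dists = np.linalg.norm(db_descs - query_desc, axis=1)
        nearest = np.argsort(dists)[:k]
        # Inverse-distance weights (epsilon avoids division by zero).
        w = 1.0 / (dists[nearest] + 1e-8)
        w /= w.sum()
        # Weighted interpolation of the corresponding poses.
        return (w[:, None, None] * db_poses[nearest]).sum(axis=0)
    ```

    Because the distance is a single vectorized norm over the database, the lookup stays cheap even for thousands of stored examples.
    
    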

    Preface: Facial and Bodily Expressions for Control and Adaptation of Games


    Online backchannel synthesis evaluation with the switching Wizard of Oz

    In this paper, we evaluate a backchannel synthesis algorithm in an online conversation between a human speaker and a virtual listener. We adopt the Switching Wizard of Oz (SWOZ) approach to assess behavior synthesis algorithms online. A human speaker watches a virtual listener that is either controlled by a human listener or by an algorithm. The source switches at random intervals. Speakers indicate when they feel they are no longer talking to a human listener. Analysis of these responses reveals patterns of inappropriate behavior in terms of quantity and timing of backchannels.

    Backchannels: Quantity, Type and Timing Matters

    In a perception experiment, we systematically varied the quantity, type and timing of backchannels. Participants viewed stimuli of a real speaker side-by-side with an animated listener and rated how human-like they perceived the latter's backchannel behavior. In addition, we obtained measures of appropriateness and optionality for each backchannel from key strokes. This approach allowed us to analyze the influence of each of the factors on entire fragments and on individual backchannels. The originally performed type and timing of a backchannel appeared to be more human-like, compared to a switched type or random timing. In addition, we found that nods are more often appropriate than vocalizations. For quantity, too few or too many backchannels per minute appeared to reduce the quality of the behavior. These findings are important for the design of algorithms for the automatic generation of backchannel behavior for artificial listeners.

    Towards real-time body pose estimation for presenters in meeting environments

    This paper describes a computer vision-based approach to body pose estimation. The algorithm can be executed in real time and processes low-resolution, monocular image sequences. A silhouette is extracted and matched against a projection of a 16-DOF human body model. In addition, skin color is used to locate hands and head. No detailed human body model is needed. We evaluate the approach both quantitatively, using synthetic image sequences, and qualitatively, on video test data of short presentations. The algorithm is developed with the aim of using it in the context of a meeting room where the poses of a presenter have to be estimated. The results can be applied in the domain of virtual environments.

    Discriminative vision-based recovery and recognition of human motion

    The automatic analysis of human motion from images opens up the way for applications in the domains of security and surveillance, human-computer interaction, animation, retrieval and sports motion analysis. In this dissertation, the focus is on robust and fast human pose recovery and action recognition. The former is a regression task where the aim is to determine the locations of key joints in the human body, given an image of a human figure. The latter is the process of labeling image sequences with action labels, a classification task.

    An example-based pose recovery approach is introduced where histograms of oriented gradients (HOG) are used as the image descriptor. From a database containing thousands of HOG-pose pairs, the visually closest examples are selected. Weighted interpolation of the corresponding poses is used to obtain the pose estimate. This approach is fast due to the use of a low-cost distance function. To cope with partial occlusions of the human figure, the normalization and matching of the HOG descriptors was changed from the global to the cell level. When occlusion areas in the image are predicted, only part of the descriptor is used for recovery, thus avoiding adaptation of the database to the occlusion setting.

    For the recognition of human actions, simple functions are used to discriminate between two classes after applying a common spatial patterns (CSP) transform on sequences of HOG descriptors. The transform maximizes the difference in variance between the two classes. Each of the discriminative functions softly votes into the two classes. After evaluation of all pairwise functions, the action class that receives most of the voting mass is the estimated class. By combining the two approaches, actions could be recognized by considering sequences of recovered, rotation-normalized poses. Thanks to this normalization, actions could be recognized from arbitrary viewpoints. By handling occlusions in the pose recovery step, actions could also be recognized from image observations in which occlusion was simulated.
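    The CSP step described in this abstract can be sketched as a generalized eigenproblem over the two class covariances: filters at one end of the spectrum maximize the variance of one class, filters at the other end maximize the variance of the other. The sketch below is a minimal version under that standard formulation; the dissertation's regularization and pairwise soft-voting classifier are omitted.

    ```python
    import numpy as np

    def csp_filters(trials_a, trials_b, n_filters=1):
        """Common spatial patterns (CSP): find projections that maximize
        the variance of one class while minimizing that of the other.
        Each trial is a (T, D) sequence of descriptors.
        Minimal sketch; regularization and the pairwise voting
        classifier are not reproduced here."""
        avg_cov = lambda trials: sum(np.cov(t.T) for t in trials) / len(trials)
        ca, cb = avg_cov(trials_a), avg_cov(trials_b)
        # Solve the generalized eigenproblem C_a w = lambda (C_a + C_b) w.
        evals, evecs = np.linalg.eig(np.linalg.solve(ca + cb, ca))
        order = np.argsort(evals.real)
        # Filters from both ends of the spectrum: large eigenvalues
        # emphasize class-A variance, small ones class-B variance.
        idx = np.concatenate([order[:n_filters], order[-n_filters:]])
        return evecs.real[:, idx]
    ```

    A per-pair discriminant would then be trained on the (log-)variances of the projected sequences, with each pair softly voting toward the final action label.
    
    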

    Automatic behavior analysis in tag games: from traditional spaces to interactive playgrounds

    Tag is a popular children’s playground game. It revolves around taggers that chase and then tag runners, upon which their roles switch. There are many variations of the game that aim to keep children engaged by presenting them with challenges and different types of gameplay. We argue that the introduction of sensing and floor projection technology in the playground can aid in providing both variation and challenge. To this end, we need to understand players’ behavior in the playground and steer the interactions using projections accordingly. In this paper, we first analyze the behavior of taggers and runners in a traditional tag setting. We focus on behavioral cues that differ between the two roles. Based on these, we present a probabilistic role recognition model. We then move to an interactive setting and evaluate the model on tag sessions in an interactive tag playground. Our model achieves 77.96% accuracy, which demonstrates the feasibility of our approach. We identify several avenues for improvement. Eventually, these should lead to a more thorough understanding of what happens in the playground, not only regarding player roles but also when the play breaks down, for example when players are bored or cheat.
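    A probabilistic role recognition model of the kind described can be sketched as per-role Gaussian likelihoods over behavioral cues. The feature here (player speed) and the model form are hypothetical stand-ins; the paper's actual cues and model are not reproduced.

    ```python
    import numpy as np

    def fit_role_model(feats, roles):
        """Fit a per-role Gaussian over behavioral cue vectors
        (e.g. speed). Hypothetical features and model form."""
        model = {}
        for r in set(roles):
            x = feats[np.array(roles) == r]
            model[r] = (x.mean(axis=0), x.std(axis=0) + 1e-6)
        return model

    def predict_role(model, x):
        """Assign the role whose Gaussian gives the cue vector x
        the highest log-likelihood."""
        def loglik(mu, sd):
            return float(-0.5 * np.sum(((x - mu) / sd) ** 2
                                       + np.log(2 * np.pi * sd ** 2)))
        return max(model, key=lambda r: loglik(*model[r]))
    ```

    With labeled tag sessions, the fit step learns how taggers and runners differ on each cue, and prediction reduces to comparing likelihoods per player per time window.
    
    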

    Learn to cycle: Time-consistent feature discovery for action recognition

    Generalizing over temporal variations is a prerequisite for effective action recognition in videos. Despite significant advances in deep neural networks, it remains a challenge to focus on short-term discriminative motions in relation to the overall performance of an action. We address this challenge by allowing some flexibility in discovering relevant spatio-temporal features. We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors inputs with similar activations with potential temporal variations. We implement this idea with a novel CNN block that uses an LSTM to encapsulate feature dynamics, in conjunction with a temporal gate that is responsible for evaluating the consistency of the discovered dynamics and the modeled features. We show consistent improvement when using SRTG blocks, with only a minimal increase in the number of GFLOPs. On Kinetics-700, we perform on par with current state-of-the-art models, and outperform these on HACS, Moments in Time, UCF-101 and HMDB-51.
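    The gating idea in this abstract, passing a recurrent summary of the feature dynamics back into the stream only when it is consistent with the current features, can be illustrated with a toy sketch. Here an exponential moving average stands in for SRTG's LSTM and cosine similarity implements the consistency check; this is an illustration of the gating principle only, not the published SRTG block.

    ```python
    import numpy as np

    def srtg_like_gate(feats, threshold=0.8, alpha=0.5):
        """Toy consistency-gated temporal residual over per-frame
        feature vectors (rows of `feats`). An exponential moving
        average stands in for the LSTM; the recurrent summary is
        added back only when it agrees with the current features
        (cosine similarity above `threshold`).
        Illustrative sketch only; not the published SRTG block."""
        out = np.empty_like(feats)
        state = feats[0].copy()
        for t, x in enumerate(feats):
            state = alpha * state + (1 - alpha) * x  # recurrent summary
            cos = state @ x / (np.linalg.norm(state) * np.linalg.norm(x) + 1e-8)
            # Gate: pass the residual only if dynamics and features agree.
            out[t] = (x + state) if cos > threshold else x
        return out
    ```

    When the features drift abruptly, the similarity drops, the gate closes, and the block degrades gracefully to an identity mapping, which matches the paper's claim of improvement at minimal extra cost.
    
    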

    Example-based pose estimation in monocular images using compact Fourier descriptors

    Automatically estimating human poses from visual input is useful but challenging due to variations in image space and the high dimensionality of the pose space. In this paper, we assume that a human silhouette can be extracted from monocular visual input. We compare the recovery performance of Fourier descriptors with a number of coefficients between 8 and 128, and two different sampling methods. An example-based approach is taken to recover upper body poses from the descriptors. We test the robustness of our approach by investigating how shape deformations due to changes in body dimensions, viewpoint and noise affect the recovery of the pose. The average error per joint is approximately 16-17° for equidistant sampling and slightly higher for extreme point sampling. Increasing the number of descriptors does not improve performance. Noise and small changes in viewpoint have only a very small effect on recovery performance, but we obtain higher error scores when recovering poses using silhouettes from a person with different body dimensions.
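    The equidistant-sampling variant of the Fourier descriptor can be sketched as follows: resample the closed silhouette contour at equal arc-length steps, encode the points as complex numbers, and keep only the lowest-frequency DFT coefficients, normalized for translation and scale. The parameter choices below are illustrative, not the paper's.

    ```python
    import numpy as np

    def fourier_descriptor(contour, n_coeffs=16, n_samples=128):
        """Compact Fourier descriptor of a closed (N, 2) contour.
        Points are resampled equidistantly along the arc, encoded
        as complex numbers, and described by the lowest-frequency
        DFT coefficients, normalized for translation and scale.
        Sketch of the general technique; parameters are illustrative."""
        pts = contour[:, 0] + 1j * contour[:, 1]
        # Equidistant resampling along the closed contour.
        closed = np.append(pts, pts[0])
        arc = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(closed)))])
        t = np.linspace(0.0, arc[-1], n_samples, endpoint=False)
        resampled = (np.interp(t, arc, closed.real)
                     + 1j * np.interp(t, arc, closed.imag))
        c = np.fft.fft(resampled)
        c[0] = 0.0                      # drop DC term: translation invariance
        c = c / (np.abs(c[1]) + 1e-12)  # scale invariance
        return c[1:n_coeffs + 1]
    ```

    Truncating to the first coefficients is what makes the descriptor compact: low frequencies capture the gross silhouette shape, which is why adding more coefficients yields diminishing returns for pose recovery.
    
    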